Context length


DeepSeek promises its new AI model has 'world-class' reasoning

Engadget

The new models give users access to a 'cost-effective 1 million context length.' DeepSeek has released its latest AI models, the V4 Pro and Flash versions, a bit over a year after it went viral and became the top-rated free app on Apple's App Store in the US. "Welcome to the era of cost-effective 1 million context length," DeepSeek said in its announcement. Context length is the maximum number of tokens that an AI model can remember, so the bigger it is, the more coherent and consistent the model stays over extended conversations. OpenAI's recently announced GPT 5.5 has a context window ranging from 400,000 to 1 million tokens, for instance.
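To make the context-length idea concrete, here is a minimal Python sketch (not DeepSeek's or OpenAI's API; the helper name and the whitespace-based token count are illustrative assumptions) of how a chat application might trim old turns so the prompt fits a model's window, which is why a larger window keeps long conversations coherent:

    # Hypothetical helper: keep only the most recent turns that fit the model's
    # context window. len(turn.split()) is a crude stand-in for a real tokenizer.
    def fit_to_context(turns, max_tokens):
        kept, used = [], 0
        for turn in reversed(turns):          # walk back from the newest turn
            cost = len(turn.split())
            if used + cost > max_tokens:
                break                         # older turns fall out of the window
            kept.append(turn)
            used += cost
        return list(reversed(kept))

    history = ["user: summarize chapter one", "assistant: ...", "user: now do chapter two"]
    fit_to_context(history, max_tokens=8)          # a small window forgets the first turn
    fit_to_context(history, max_tokens=1_000_000)  # a 1M-token window keeps everything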


In-Place Test-Time Training

Guhao Feng, Shengjie Luo, Kai Hua, Ge Zhang, Di He, Wenhao Huang, Tianle Cai

arXiv.org Machine Learning

The static "train then deploy" paradigm fundamentally limits Large Language Models (LLMs) from dynamically adapting their weights in response to continuous streams of new information inherent in real-world tasks. Test-Time Training (TTT) offers a compelling alternative by updating a subset of model parameters (fast weights) at inference time, yet its potential in the current LLM ecosystem is hindered by critical barriers including architectural incompatibility, computational inefficiency, and misaligned fast-weight objectives for language modeling. In this work, we introduce In-Place Test-Time Training (In-Place TTT), a framework that seamlessly endows LLMs with Test-Time Training ability. In-Place TTT treats the final projection matrix of the ubiquitous MLP blocks as its adaptable fast weights, enabling a "drop-in" enhancement for LLMs without costly retraining from scratch. Furthermore, we replace TTT's generic reconstruction objective with a tailored, theoretically-grounded objective explicitly aligned with the Next-Token-Prediction task governing autoregressive language modeling. This principled objective, combined with an efficient chunk-wise update mechanism, results in a highly scalable algorithm compatible with context parallelism. Extensive experiments validate our framework's effectiveness: as an in-place enhancement, it enables a 4B-parameter model to achieve superior performance on tasks with contexts up to 128k, and when pretrained from scratch, it consistently outperforms competitive TTT-related approaches. Ablation study results further provide deeper insights into our design choices. Collectively, our results establish In-Place TTT as a promising step towards a paradigm of continual learning in LLMs.
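As a rough illustration of the mechanism described above, the following Python/PyTorch sketch treats the down-projection of a toy MLP block as the fast weight and updates it in place, chunk by chunk, on a next-token prediction loss at inference time. The model, hyperparameters, and function names are assumptions for illustration, and a plain cross-entropy loss stands in for the paper's tailored objective; this is not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyLM(nn.Module):
        # Toy stand-in for an LLM layer: embedding -> MLP block -> output head.
        def __init__(self, vocab=256, d=64, hidden=128):
            super().__init__()
            self.emb = nn.Embedding(vocab, d)
            self.up = nn.Linear(d, hidden)      # slow weight, frozen at test time
            self.down = nn.Linear(hidden, d)    # final MLP projection: the fast weight
            self.head = nn.Linear(d, vocab)     # slow weight, frozen at test time

        def forward(self, ids):
            h = self.emb(ids)
            h = h + self.down(F.gelu(self.up(h)))   # MLP block with residual connection
            return self.head(h)

    def in_place_ttt(model, ids, chunk=32, fast_lr=1e-2):
        # Freeze everything except the down-projection, then take one gradient
        # step per chunk on the next-token prediction loss over that chunk.
        for p in model.parameters():
            p.requires_grad_(False)
        fast = model.down.weight
        fast.requires_grad_(True)
        opt = torch.optim.SGD([fast], lr=fast_lr)
        for start in range(0, ids.size(1) - 1, chunk):
            piece = ids[:, start:start + chunk + 1]       # inputs plus shifted targets
            logits = model(piece[:, :-1])
            loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                   piece[:, 1:].reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()                                    # fast weight updated in place
        return model

    model = ToyLM()
    long_context = torch.randint(0, 256, (1, 512))
    in_place_ttt(model, long_context)   # adapt on the context, then generate as usual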


GV-Rep: A Large-Scale Dataset for Genetic Variant Representation Learning

Neural Information Processing Systems

The development of deep learning approaches for modeling the multifactorial effects of genetic variants (GVs) is still in its nascent stages, primarily due to the lack of comprehensive datasets that capture the intricate relationships between GVs and their downstream effects on complex traits.


Many-shot Jailbreaking

Neural Information Processing Systems

Longer contexts present a new attack surface for adversarial attacks. In search of a "fruit-fly" of long-context vulnerabilities, we study Many-shot Jailbreaking (MSJ; Figure 1), a simple yet effective and scalable jailbreak.